The New Testability Sins: Don't Atone, Avoid!

By L.T. Wang, Jaehee Lee, and Hsin-Po Wang
Posted  03/29/01, 01:14:44 PM EDT

There's good news and bad news about testability in the year 2001. The good news is that changes in circuit design techniques have erased some of the original seven deadly sins of designing a circuit for testability (see ISD Magazine, "The Seven Deadly Sins of Scan-Based Designs" August 1997, p. 50). The bad news is that some new potential sins have emerged coincident with today's advances in silicon technology.

The fault count (both human and chip related) is rising because, like the human race, microelectronic circuitry continues to become more complex, leaving the door open to devilishly new complications. Ignore the warning signs of increased fault count and you can get burned.

Two advances in particular have complicated the situation: the variety and sheer number of embedded memories made practical by submicron technologies, and the rising importance of power dissipation as circuits continue to shrink. Take heart. Although these changes increase the risk of violating testability design rules, falling from grace is not a given. To avoid further testability transgressions during automated test pattern generation (ATPG) planning, pay special attention to the support needed for RAMs, the logic surrounding the RAMs, and gated clocks.

Memory shadow logic test

For the logic surrounding the RAMs, sometimes called shadow logic, there are generally four viable test approaches:

--The black box approach, in which users rely on functional patterns to detect faults in logic surrounding a memory (the shadow logic). Fault coverage is low, but there's no need to change a design physically. So there's no area overhead or timing penalty.

--The bypass approach, which introduces one MUX delay at Q outputs and cannot check internal memory, but detects all shadow logic faults. This is probably the easiest design-for-test (DFT) technique for detecting all faults surrounding the memory. For timing-critical RAM designs, however, this approach is usually impractical.

--Scannable RAM testing, in which scannable flip-flops surround the memory. This technique provides good shadow fault coverage, but at the expense of greater overhead. However, the scannable RAM technique also can be used to test memory cells. Here, the ATPG pattern basically tests the shadow logic and a special March (checkerboard) pattern tests the memory cells.

--Sequential RAM testing, in which the memory clock and write-enable signals are controlled directly through primary input pins. The memory clock signal must be independent of the system clock. This technique allows full-scan ATPG to place the proper patterns at the input scan flip-flops of the shadow logic, propagate the values through the memory by writing them in one clock cycle, and read them out in another clock cycle before the output values of the shadow logic are captured at the scan flip-flops on the other side.

The sequential RAM technique also provides good shadow fault coverage, but at the expense of separating the system and memory clocks and making the write-enable signal directly controllable from a primary input. However, the technique avoids adding one MUX delay at the Q outputs, as the bypass approach requires (see Figure 1).
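To make the write-through flow concrete, here is a minimal behavioral sketch in Python. The shadow-logic functions, the address, and the two-cycle write/read sequence are hypothetical stand-ins, not a description of any particular design.

```python
# Behavioral sketch of sequential RAM testing (write-through).
# Assumes a hypothetical design: input shadow logic drives the RAM's
# data port, and output shadow logic sits on the RAM's Q port.

ram = {}  # simple model of the embedded memory: address -> data word

def input_shadow_logic(scan_value):
    # Hypothetical shadow logic between the input scan flip-flops and the RAM.
    return scan_value ^ 0b1010          # data presented at the RAM's D port

def output_shadow_logic(q_value):
    # Hypothetical shadow logic between the RAM's Q port and the capture flops.
    return (q_value << 1) & 0xFF        # value captured by the scan flip-flops

def write_cycle(address, d_value):
    # Cycle 1: pulse the separately controlled memory clock with write-enable high.
    ram[address] = d_value

def read_cycle(address):
    # Cycle 2: pulse the memory clock with write-enable low to read the word back.
    return ram[address]

# One ATPG pattern: scan in a value, write it through the RAM, read it out,
# and capture the response of the output shadow logic.
scan_in = 0b0110
d = input_shadow_logic(scan_in)
write_cycle(address=0x00, d_value=d)
q = read_cycle(address=0x00)
captured = output_shadow_logic(q)
print(f"scanned in {scan_in:04b}, captured {captured:08b}")
```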

Note that, for the memory cells, the bypass and sequential RAM approaches amount to write-around and write-through techniques, respectively. "Black boxing" the memory cells sometimes calls for multiplexing the embedded memory cells to physical input/output pins (called isolation) to increase the memory's fault coverage. The idea is to provide direct testing via external automatic test equipment.

With the bypass or sequential RAM test option, the memory block is effectively bypassed for full-chip test purposes. Buffers and multiplexers are placed either inside or around the memory to make it transparent from the standpoint of test generation and test execution.

Not combining isolation with black boxing could result in very low coverage for the memory-cell portions of the chip. Because the driving-logic signals could become unobservable, fault coverage there is significantly reduced as well. With so many unobservable signals, ATPG algorithms will find it hard to come up with a test-pattern set that achieves acceptably high fault coverage. Moreover, the additional multiplexers in the bypassing approach add gate delays in the data path, which could pose a performance problem, depending on how the memory cells are used.

Embedded memory test

There are three choices for embedded memory testing:

--Memory built-in self-test (BIST): on-chip circuitry automatically generates the necessary address, data, and control signal information and checks the memory cell output data to ensure functionality.

--Functional testing: the application of functional patterns through primary input/output pins.

--Memory scan: the application of March patterns through scan flip-flops. A scan chain is used to scan in the marching 1s and 0s patterns and to check for all bit failures.

The memory BIST approach is preferred when the memory exceeds about 16 Kbits; smaller memories usually can go with the memory scan or functional test technique. In most complex designs containing multiple large embedded memories, memory BIST can also provide diagnostic capability for laser repair or failure-map analysis.

The BIST approach results in higher fault coverage and shorter test vector sets than other methods because the BIST circuitry can execute and evaluate the results cycle by cycle. There is no need to propagate the results to primary output pins. Importantly, the approach also permits at-speed testing of the embedded memory cells and, where multiple embedded memory cells exist in a large design, the sharing of the on-chip memory BIST controller.
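As a rough illustration of how BIST circuitry steps through addresses and checks every read on the fly, here is a software sketch of one simple March-style sequence run against a behavioral memory model. The particular March elements, the FaultyRAM class, and the injected stuck-at bit are illustrative assumptions; real controllers implement such sequences in hardware.

```python
# Software sketch of a March-style memory BIST sequence.
# A real BIST controller implements this in hardware and compares each
# read result cycle by cycle, so no data needs to reach the chip pins.

def march_test(memory, size, width=8):
    """Run a simple March-like sequence; return addresses that miscompare."""
    all_ones = (1 << width) - 1
    failures = set()

    def read_expect(addr, expected):
        if memory.read(addr) != expected:
            failures.add(addr)

    # Element 1: ascending addresses, write 0 everywhere.
    for addr in range(size):
        memory.write(addr, 0)
    # Element 2: ascending, read 0 then write all-ones.
    for addr in range(size):
        read_expect(addr, 0)
        memory.write(addr, all_ones)
    # Element 3: descending, read all-ones then write 0.
    for addr in reversed(range(size)):
        read_expect(addr, all_ones)
        memory.write(addr, 0)
    # Element 4: ascending, final read of 0.
    for addr in range(size):
        read_expect(addr, 0)
    return failures

class FaultyRAM:
    """Behavioral RAM with one stuck-at-0 bit, to show the test catching it."""
    def __init__(self, size, stuck_addr, stuck_bit):
        self.cells = [0] * size
        self.stuck_addr, self.stuck_bit = stuck_addr, stuck_bit
    def write(self, addr, data):
        self.cells[addr] = data
    def read(self, addr):
        data = self.cells[addr]
        if addr == self.stuck_addr:
            data &= ~(1 << self.stuck_bit)   # bit stuck at 0
        return data

ram = FaultyRAM(size=64, stuck_addr=17, stuck_bit=3)
print("failing addresses:", march_test(ram, size=64))   # -> {17}
```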

The second option, functional test through isolation, multiplexes physical full-chip pins to permit direct access to the embedded memory cells. Instead of making the embedded memory cells virtually transparent, as was done with the black-box technique, the cells are made completely controllable and observable for testing via external ATE.

In practice, asserting the test-enable line causes the multiplexers to convert normal, functional physical chip inputs and outputs to memory address, data and control inputs, and data outputs. This method can provide very high-quality memory cell test coverage, depending on the memory architecture and the test vectors applied by the external ATE.

However, there must be enough convertible physical chip pins to handle the complete set of memory cell I/O pins when the chip is placed in the test mode. There are, of course, extra overall chip I/O gate delays associated with this multiplexing technique; it's normally not suitable for on-chip BIST functions.

To summarize, a MUX'ed architecture can be deployed for either memory-cell testing (using isolation) or shadow-logic testing (using bypassing). In the first, designers MUX all memory input/output/control/clocks to external pins, where an external memory tester takes over. In the second, for testing the shadow logic connecting to memory input/output, the memory's output is MUX'ed with the memory's input, that is, MUX (New_Q0, Q0, D0, TEST_MODE).
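Here is a behavioral sketch of the bypass form of that MUX'ed architecture, written around a hypothetical single-port RAM wrapper; the signal names simply mirror the MUX (New_Q0, Q0, D0, TEST_MODE) notation above.

```python
# Behavioral sketch of the bypass MUX: in test mode, the data input D0 is
# steered straight to New_Q0 so ATPG sees a combinational path through
# the memory; in functional mode, the RAM's own output Q0 passes through.

class BypassedRAM:
    def __init__(self, size):
        self.cells = [0] * size

    def access(self, addr, d0, write_enable, test_mode):
        if write_enable and not test_mode:
            self.cells[addr] = d0           # normal functional write
        q0 = self.cells[addr]               # RAM output Q0
        new_q0 = d0 if test_mode else q0    # MUX(New_Q0, Q0, D0, TEST_MODE)
        return new_q0

ram = BypassedRAM(size=16)
print(ram.access(addr=3, d0=0x5A, write_enable=True,  test_mode=False))  # 0x5A via the RAM
print(ram.access(addr=3, d0=0xA5, write_enable=False, test_mode=True))   # 0xA5 via the bypass
```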

The third option is to test embedded memories with memory scan. It has been shown that the scannable memory configuration can detect stuck-at faults in the memory cells, address decoders, and read/write logic. It also can detect static, dynamic, and coupling faults between memory cells of adjacent words and static coupling faults between memory cells in the same word.

Unfortunately, the scannable method results in long test vector sets and a large number of clock cycles because the normally parallel address and data input/output information must be converted to a serial set of patterns. This method also doesn't provide anywhere near at-speed testing of the embedded memory cells and can result in unduly long run times for large memory arrays.
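To see why serializing the address and data inflates the cycle count, consider a back-of-the-envelope comparison for a hypothetical 1 K x 16 RAM behind an assumed 200-bit scan chain, with four memory operations per address; all of the numbers are illustrative.

```python
# Rough cycle-count comparison: memory scan vs. memory BIST,
# for a hypothetical 1 K x 16 RAM and an assumed 200-bit scan chain.

words        = 1024
chain_length = 200          # flip-flops in the scan chain (assumption)
ops_per_word = 4            # reads/writes per address in the example sequence

# Memory scan: every read or write needs a full scan load/unload.
scan_cycles = words * ops_per_word * chain_length

# Memory BIST: the controller issues roughly one memory operation per clock.
bist_cycles = words * ops_per_word

print(f"memory scan: ~{scan_cycles:,} clock cycles")   # ~819,200
print(f"memory BIST: ~{bist_cycles:,} clock cycles")   # ~4,096
```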

Low-power design

For low-power circuitry, the goal is to ensure that the ATPG tool understands the design and can generate the patterns needed to produce high fault coverage. For instance, modern low-power designs often use a gated-clock scheme, which saves power by turning off circuits when they are not functioning. Here, designers basically AND the 50-percent-duty-cycle clock with an enable signal so that data is latched on a falling edge (see Figure 2).

ATPG and scan tools must recognize such an arrangement and produce the appropriate test patterns without asking the user to install additional OR gates in front of the AND gate to force active clocking. In other words, test mode should not require the clock enable to be forced active, a change that would compromise the low-power operation.
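The sketch below models, in a few lines of Python, why a gated clock trips up scan shifting: a flip-flop whose clock is the AND of the system clock and an enable simply never updates while the enable is low, so the tool must justify the enable on its own rather than rely on an added OR gate. The three-flop chain and its signal handling are hypothetical.

```python
# Behavioral sketch: a 3-flop scan chain where the middle flop's clock is
# gated (clk AND enable). With enable low, the middle flop never updates
# and the chain cannot shift; the ATPG tool must set enable = 1 itself.

def shift_chain(flops, scan_in_bits, enable):
    """Shift bits through the chain, one per clock pulse; return final state."""
    state = list(flops)
    for bit in scan_in_bits:
        nxt = [bit] + state[:-1]          # what each flop would capture
        for i in range(len(state)):
            gated = (i == 1)              # flop 1 has the gated clock
            if enable or not gated:       # gated flop only clocks when enabled
                state[i] = nxt[i]
    return state

print(shift_chain([0, 0, 0], [1, 1, 1], enable=1))  # [1, 1, 1] - shifts normally
print(shift_chain([0, 0, 0], [1, 1, 1], enable=0))  # [1, 0, 0] - chain is broken
```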


L. T. Wang is founder and president of SynTest Technologies (Sunnyvale, CA). Jaehee Lee and Hsin-Po Wang are both at SynTest, Lee as DFT manager and Wang as director of engineering.


Revisiting the Seven Deadly Sins of Scan-Based Designs

In no particular order, the original seven deadly sins of scan-based design are: 1) the use of SR latches, 2) D latches, 3) combinational feedback loops, 4) gated clocks, 5) derived clocks, 6) sequentially controlled asynchronous set or reset, and 7) bus contentions. These days, combinational feedback loops are mostly passé, and SR latches, one-hot logic, pulse generators, and similar circuit techniques are no longer common. The remaining structures still must be eliminated or limited in use if high fault coverage is to be achieved.

Including the above structures in a scan-based design can prevent the scan chain from properly shifting patterns in and can make it virtually impossible to generate test patterns. However, sometimes the structure can't be avoided and undesirable work-arounds will be necessary unless the right ATPG tool is available.

For example, designers often use a gated clock to cut power consumption by temporarily turning off part of a circuit when it is not in use. A gated clock is generated from an external clock and goes through at least one combinational gate, as well as buffers and inverters.

Unfortunately, the clocks of the flip-flops driven by a gated clock can't be controlled from primary inputs, making it impossible to scan in data.

Removing the flip-flops from the scan chain is one solution, but doing so often results in a loss of fault coverage.

If you can't omit gated clocks from a design, until recently you had two options: add an OR gate at the Enable pin so that the external clock can be connected to the flip-flop's clock input, or multiplex the data with the flip-flop's output. Either option is "sinful" because of the area or timing penalties imposed.
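For completeness, here is a tiny sketch of the first workaround: OR the enable with a test-mode signal so the gated flip-flop is always clocked during scan. The extra gate and the forced enable are exactly the penalties the sidebar calls sinful; the signal names are hypothetical.

```python
# Sketch of the OR-gate workaround: the clock gate sees (enable OR test_mode),
# so during scan (test_mode = 1) every flop is clocked, at the cost of the
# extra gate and of defeating the clock gating while testing.

def gated_clock(clk, enable, test_mode):
    return clk and (enable or test_mode)   # added OR gate in front of the AND

# Functional mode: the clock follows the enable, as the low-power design intends.
print([gated_clock(clk=1, enable=e, test_mode=0) for e in (0, 1)])  # [0, 1]
# Test mode: the clock always passes, so scan shifting works regardless of enable.
print([gated_clock(clk=1, enable=e, test_mode=1) for e in (0, 1)])  # [1, 1]
```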

